Adaptive regularization parameter selection method for enhancing generalization capability of neural networks
Authors
Abstract
Similar resources
Regularization Parameter Selection for Faulty Neural Networks
Regularization techniques have attracted much research over the past decades. Most work focuses on designing the regularization term, and little on selecting the optimal regularization parameter, especially for faulty neural networks. In the real world, node faults inevitably occur, leading to many faulty network patterns. If the conventional method is employed, i.e...
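To make the notion of a "faulty network pattern" concrete, here is a minimal sketch (the masking scheme, layer sizes, and fault probability are illustrative assumptions, not the paper's method) in which a fault pattern is simulated by zeroing out randomly chosen hidden nodes:

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(x, W1, W2, fault_mask=None):
    """One-hidden-layer network; fault_mask zeroes out faulty hidden nodes."""
    h = np.tanh(x @ W1)
    if fault_mask is not None:
        h = h * fault_mask  # a zero entry models a stuck-at-zero node fault
    return h @ W2

# Toy weights and inputs (hypothetical sizes)
W1 = rng.normal(size=(4, 8)) * 0.5
W2 = rng.normal(size=(8, 1)) * 0.5
x = rng.normal(size=(10, 4))

# One faulty network pattern: each hidden node fails with probability p
p = 0.25
fault_mask = (rng.random(8) > p).astype(float)

y_clean = forward(x, W1, W2)
y_faulty = forward(x, W1, W2, fault_mask)
```

Each distinct fault mask yields one of the many possible faulty network patterns the abstract refers to; a fault-aware regularizer would be tuned against the expected error over such patterns.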
Regularization Learning of Neural Networks for Generalization
In this paper, we propose a learning method for neural networks based on regularization and analyze its generalization capability. In learning from examples, training samples are drawn independently from an unknown probability distribution. The goal of learning is to minimize the expected risk on future test samples, which are drawn from the same distribution. The problem can b...
Regularization parameter estimation for feedforward neural networks
Under the framework of the Kullback-Leibler (KL) distance, we show that a particular case of the Gaussian probability function for feedforward neural networks (NNs) reduces to the first-order Tikhonov regularizer. The smoothing parameter in kernel density estimation plays the role of the regularization parameter. Under some approximations, a formula is derived for estimating the regularization p...
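The first-order Tikhonov regularizer corresponds to a weight-decay (ridge) penalty. As a rough sketch of regularization-parameter selection — held-out validation rather than the KL-based estimation formula the abstract describes, and using the linear (ridge) special case where the regularized solution has a closed form:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy linear data with additive noise (illustrative, not from the paper)
X = rng.normal(size=(60, 5))
w_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
y = X @ w_true + 0.3 * rng.normal(size=60)

X_tr, y_tr = X[:40], y[:40]
X_va, y_va = X[40:], y[40:]

def ridge_fit(X, y, lam):
    """Closed-form first-order Tikhonov (ridge) solution:
    w = (X^T X + lam * I)^{-1} X^T y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

# Pick the regularization parameter by validation error on held-out data
lams = [1e-4, 1e-2, 1e0, 1e2]
errs = [np.mean((X_va @ ridge_fit(X_tr, y_tr, lam) - y_va) ** 2)
        for lam in lams]
best_lam = lams[int(np.argmin(errs))]
```

The KL-based formula in the paper replaces this grid search with a direct estimate computed from the training data alone.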
Selection of Varying Spatially Adaptive Regularization Parameter for Image Deconvolution
Deconvolution in image processing is an ill-posed inverse problem that necessitates a trade-off, adjusted by a regularization parameter, between fidelity to the data and smoothness of the solution. In this paper we propose two techniques for selecting a spatially varying regularization parameter that minimizes the mean squared error at every pixel of the image. The first algorithm uses the estimate of the square...
Studies of model selection and regularization for generalization in neural networks with applications
This thesis investigates the generalization problem in artificial neural networks, attacking it from two major angles: regularization and model selection. On the regularization side, under the framework of Kullback–Leibler divergence for feedforward neural networks, we develop a new formula for the regularization parameter in Gaussian density kernel estimation based on available training da...
Journal
Journal title: Artificial Intelligence
Year: 1999
ISSN: 0004-3702
DOI: 10.1016/s0004-3702(98)00115-5